LLM API Key Security

This page explains how to configure LLM providers in Annolid while keeping API keys safe.

What Annolid Stores

Annolid stores provider/model preferences in:

  • ~/.annolid/llm_settings.json

Annolid does not persist API keys to this file. Secret fields such as api_key, token, and access_token are scrubbed from the settings before they are written.
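
The scrubbing step can be sketched in Python (a minimal illustration; scrub_secrets and the exact field list are assumptions for this sketch, not Annolid's actual code):

```python
import json

SECRET_FIELDS = {"api_key", "token", "access_token"}

def scrub_secrets(settings: dict) -> dict:
    """Return a copy of settings with secret-like fields removed, recursively."""
    clean = {}
    for key, value in settings.items():
        if key in SECRET_FIELDS:
            continue  # never persist secrets to disk
        if isinstance(value, dict):
            value = scrub_secrets(value)
        clean[key] = value
    return clean

settings = {"provider": "openai", "api_key": "sk-PLACEHOLDER", "model": "example-model"}
print(json.dumps(scrub_secrets(settings)))
# → {"provider": "openai", "model": "example-model"}
```

The key stays in memory only; nothing secret reaches the JSON that is written out.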

File Permission Hardening

Annolid enforces strict permissions for LLM settings storage:

  • ~/.annolid is set to 700

  • ~/.annolid/llm_settings.json is set to 600

This restricts read and write access to your user account.
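
The hardening described above can be sketched as follows (a minimal illustration assuming POSIX permissions; harden_permissions is a hypothetical helper, not Annolid's implementation):

```python
import os
import stat
from pathlib import Path

def harden_permissions(settings_dir: Path, settings_file: Path) -> None:
    """Restrict the settings directory to 700 and the settings file to 600."""
    settings_dir.mkdir(parents=True, exist_ok=True)
    os.chmod(settings_dir, stat.S_IRWXU)  # 700: owner read/write/execute only
    if settings_file.exists():
        os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)  # 600: owner read/write

harden_permissions(Path.home() / ".annolid",
                   Path.home() / ".annolid" / "llm_settings.json")
```

On Windows these modes are only partially honored, which is one more reason to rely on environment variables rather than files for secrets.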

Safe Operational Practices

  • Never commit .env, API keys, or copied key values to git.

  • Keep ~/.annolid/.env and project .env out of source control.

  • Avoid pasting keys into logs, screenshots, or issue trackers.

  • Rotate keys immediately if accidentally exposed.

  • Use separate keys per environment (dev, test, production).

  • Prefer least-privilege tokens and provider-side usage limits.
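
A project-level .gitignore along these lines helps keep key-bearing files out of source control (entries are examples; adjust to your repository layout):

```gitignore
# Never commit environment files containing API keys
.env
.env.*
*.env
```

~/.annolid/.env lives outside the repository, so only project-local .env files need ignoring.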

Verify Configuration

You can verify that keys are available without writing them to disk:

python -c "import os; print(bool(os.getenv('OPENAI_API_KEY')), bool(os.getenv('GEMINI_API_KEY') or os.getenv('GOOGLE_API_KEY')))"

If either value prints False, configure the corresponding environment variable and restart Annolid.
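
The same check can be wrapped in a slightly more readable helper (report_key_status is a hypothetical name; the environment variable names are the ones used in the command above):

```python
import os

def report_key_status() -> dict:
    """Report which LLM provider keys are visible in the environment."""
    return {
        "openai": bool(os.getenv("OPENAI_API_KEY")),
        "gemini": bool(os.getenv("GEMINI_API_KEY") or os.getenv("GOOGLE_API_KEY")),
    }

for provider, available in report_key_status().items():
    print(f"{provider}: {'configured' if available else 'missing'}")
```

Only booleans are printed; the key values themselves never appear in output or logs.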

You can also run Annolid’s built-in security check:

annolid agent-security-check

The command reports:

  • settings file/dir permission posture,

  • whether any secret-like fields were persisted,

  • OpenAI/Gemini key availability (including environment-variable fallbacks).

The exit code is 0 when everything checks out and 1 when a warning was reported.
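
Because the exit code is machine-readable, the check can gate a script or CI step. A sketch in Python (security_check_passed is a hypothetical wrapper around the command above):

```python
import subprocess
import sys

def security_check_passed(cmd: tuple = ("annolid", "agent-security-check")) -> bool:
    """Run the security check; exit code 0 means OK, anything else is a warning."""
    return subprocess.run(list(cmd)).returncode == 0

def main() -> None:
    if security_check_passed():
        print("LLM settings look safe")
    else:
        sys.exit("security check reported warnings; review before proceeding")
```

The same gating works in plain shell by testing the command directly in an if statement.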

Adding New Providers in GUI

In AI Model Settings, click Add Provider to create a new OpenAI-compatible provider (for example, NVIDIA NIM) by supplying:

  • provider id (e.g. nvidia)

  • display name

  • base URL (e.g. https://integrate.api.nvidia.com/v1)

  • API key env var name (e.g. NVIDIA_API_KEY)

After saving, the provider appears in the provider selector without code changes.
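
The non-secret provider definition might be persisted roughly like this in llm_settings.json (field names are illustrative; the actual schema may differ). Note that only the environment variable name is stored, never the key itself:

```json
{
  "providers": {
    "nvidia": {
      "display_name": "NVIDIA NIM",
      "base_url": "https://integrate.api.nvidia.com/v1",
      "api_key_env": "NVIDIA_API_KEY"
    }
  }
}
```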